Safeguarded Learned Convex Optimization
Authors
Abstract
Applications abound in which optimization problems must be repeatedly solved, each time with new (but similar) data. Analytic algorithms can be hand-designed to provably solve these problems in an iterative fashion. On one hand, data-driven "learn to optimize" (L2O) algorithms require much fewer iterations at a similar cost per iteration as general-purpose algorithms. On the other hand, unfortunately, many L2O algorithms lack convergence guarantees. To fuse the advantages of both approaches, we present a Safe-L2O framework. Safe-L2O updates incorporate a safeguard that guarantees convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the learned updates would perform poorly or appear to diverge. This yields the numerical benefits of employing machine learning to create rapid algorithms while still guaranteeing convergence. Our numerical examples show convergence of Safe-L2O algorithms, even when provided data not from the distribution of training data.
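The safeguard logic described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names (`learned_update`, the acceptance factor `alpha`) and the choice of gradient norm as the convergence measure are assumptions for the example.

```python
import numpy as np

def safe_l2o(x0, grad, learned_update, step=0.1, alpha=0.99, iters=50):
    """Safeguarded L2O sketch: at each step, try the learned update and
    accept it only if it sufficiently shrinks the gradient norm; otherwise
    fall back to a provably convergent gradient-descent step."""
    x = x0
    for _ in range(iters):
        candidate = learned_update(x)
        # Safeguard test: accept the learned step only if ||grad|| decreases.
        if np.linalg.norm(grad(candidate)) <= alpha * np.linalg.norm(grad(x)):
            x = candidate
        else:
            x = x - step * grad(x)  # fallback: plain gradient descent
    return x

# Toy usage: f(x) = ||x||^2 / 2, with a deliberately bad "learned" update.
grad = lambda x: x
bad_update = lambda x: 2.0 * x  # would diverge if accepted; safeguard rejects it
x_star = safe_l2o(np.ones(3), grad, bad_update)
```

Because the bad learned update always increases the gradient norm, the safeguard rejects it on every iteration and the fallback gradient steps still drive the iterates to the minimizer.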
Similar resources
Convex Optimization
In this paper we propose an alternative solution to 4-block l1 problems. This alternative is based upon the idea of transforming the l1 problem into an equivalent (in the sense of having the same solution) mixed l1/H∞ problem that can be solved using convex optimization techniques. The proposed algorithm has the advantage of generating, at each step, an upper bound of the cost that converges un...
A Safeguarded Teleoperation Controller
This paper presents a control system for mobile robots. The controller was developed to satisfy the needs of a wide range of operator interfaces and teleoperation in unknown, unstructured environments. In particular, the controller supports varying degrees of cooperation between the operator and robot, from direct to supervisory control. The controller has a modular architecture and includes in...
Beyond Convex Optimization: Star-Convex Functions
We introduce a polynomial time algorithm for optimizing the class of star-convex functions, under no Lipschitz or other smoothness assumptions whatsoever, and no restrictions except exponential boundedness on a region about the origin, and Lebesgue measurability. The algorithm’s performance is polynomial in the requested number of digits of accuracy and the dimension of the search domain. This ...
Distributed Convex Optimization with Many Convex Constraints
We address the problem of solving convex optimization problems with many convex constraints in a distributed setting. Our approach is based on an extension of the alternating direction method of multipliers (ADMM), which recently gained a lot of attention in the Big Data context. Although it was invented decades ago, so far ADMM can be applied only to unconstrained problems and problems with...
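The ADMM scheme this snippet refers to can be sketched on a standard problem. The lasso objective below is an illustrative choice, not from the paper, and the parameter names (`rho`, `lam`) are assumptions of the sketch.

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM sketch for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.inv(AtA + rho * np.eye(n))  # cached for the repeated x-update
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))                # x-update: ridge-type solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                # scaled dual update
    return z
```

For `A = I` the lasso solution is the soft-thresholding of `b`, which gives a quick sanity check of the iteration.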
On convex optimization without convex representation
We consider the convex optimization problem P : min_x {f(x) : x ∈ K}, where f is convex and continuously differentiable, and K ⊂ R^n is a compact convex set with the representation {x ∈ R^n : g_j(x) ≥ 0, j = 1, ..., m} for some continuously differentiable functions g_j. We discuss the case where the g_j's are not all concave (in contrast with convex programming, where they all are). In particular, even if th...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: ['2159-5399', '2374-3468']
DOI: https://doi.org/10.1609/aaai.v37i6.25950